
    Exploiting Evolutionary Modeling to Prevail in Iterated Prisoner’s Dilemma Tournaments

    The iterated prisoner’s dilemma is a famous model of cooperation and conflict in game theory. Its origin can be traced back to the Cold War, and countless strategies for playing it have been proposed so far, either designed by hand or generated automatically by computers. In the 2000s, scholars started focusing on adaptive players, that is, players able to classify their opponent’s behavior and adopt an effective counter-strategy. The player presented in this paper pushes this idea even further: it builds a model of the current adversary from scratch, without relying on any pre-defined archetypes, and tweaks it as the game develops using an evolutionary algorithm; at the same time, it exploits the model to steer the game into the most favorable continuation. Models are compact non-deterministic finite state machines; they are extremely efficient at predicting opponents’ replies, without necessarily being completely correct. Experimental results show that such a player is able to win several one-to-one games against strong opponents taken from the literature, and that it consistently prevails in round-robin tournaments of different sizes.

    New Techniques to Reduce the Execution Time of Functional Test Programs

    The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for the in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, reducing the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities and allow analyzing the computational costs and effectiveness of the different algorithms.

    A Functional Approach for Testing the Reorder Buffer Memory

    Superscalar processors may have the ability to execute instructions out of order to better exploit the internal hardware and to maximize performance. To maintain the in-order commitment of instructions and to guarantee the correctness of the final results (as well as precise exception management), the Reorder Buffer (ROB) is used. From the architectural point of view, the ROB is a memory array of several thousands of bits that must be tested against hardware faults to ensure the correct behavior of the processor. Since it is deeply embedded within the microprocessor circuitry, the most straightforward approach to testing the ROB is through Built-In Self-Test solutions, which are typically adopted by manufacturers for end-of-production test. However, these solutions may not always be used for the test during the operational phase (in-field test), which aims at detecting possible hardware faults arising when the electronic system works in its target environment. In fact, these solutions require the usage of test infrastructures that may not be accessible and/or documented, or simply not usable during the operational phase. This paper proposes an alternative solution, based on a functional approach, in which the test is performed by forcing the processor to execute a specially written test program and checking the behavior of the processor. This approach can be adopted for in-field test, e.g., at power-on, at power-off, or during the time slots unused by the system application. The method has been validated resorting to both an architectural and a memory fault simulator.

    Optimizing groups of colluding strong attackers in mobile urban communication networks with evolutionary algorithms

    In novel forms of the Social Internet of Things, any mobile user within communication range may help routing messages for another user in the network. The resulting message delivery rate depends both on the users’ mobility patterns and on the message load in the network. This new type of configuration, however, poses new challenges to security: amongst them, assessing the effect that a group of colluding malicious participants can have on the global message delivery rate in such a network is far from trivial. In this work, after modeling this question as an optimization problem, we are able to find quite interesting results by coupling a network simulator with an evolutionary algorithm. The chosen algorithm is specifically designed to solve problems whose solutions can be decomposed into parts sharing the same structure. We demonstrate the effectiveness of the proposed approach on two medium-sized Delay-Tolerant Networks, realistically simulated in the urban contexts of two cities with very different route topologies: Venice and San Francisco. In all experiments, our methodology produces attack patterns that lower network performance far more than those of previous studies on the subject, as the evolutionary core is able to exploit the specific weaknesses of each target configuration.
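    The coupling between simulator and evolutionary algorithm can be sketched as follows (a toy illustration, not the paper's group-structured algorithm; all names and the stand-in fitness are invented, with the real delivery rate coming from a network simulator):

```python
import math
import random

# A simple (1+1) evolutionary loop that mutates a group of attacker start
# positions to minimise a black-box delivery_rate() supplied by the simulator.

def evolve_attack(start_positions, delivery_rate, generations=100, seed=42):
    rng = random.Random(seed)
    best = list(start_positions)
    best_rate = delivery_rate(best)
    for _ in range(generations):
        child = list(best)
        i = rng.randrange(len(child))                  # mutate one group member
        child[i] = (child[i][0] + rng.uniform(-1, 1),
                    child[i][1] + rng.uniform(-1, 1))
        rate = delivery_rate(child)
        if rate <= best_rate:                          # lower delivery = stronger attack
            best, best_rate = child, rate
    return best, best_rate

# Toy stand-in for the simulator: attackers closer to a hotspot at (0, 0)
# disrupt more traffic, so the delivery rate grows with their mean distance.
def toy_delivery_rate(attackers):
    return sum(math.hypot(x, y) for x, y in attackers) / len(attackers)

start = [(5.0, 5.0), (-3.0, 4.0)]
best, best_rate = evolve_attack(start, toy_delivery_rate)
print(best_rate)  # lower than the starting delivery rate
```

    The real evaluation is far more expensive, which is why the paper's algorithm exploits the decomposable, group-based structure of the solutions instead of a naive loop like this one.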

    Identification and Rejuvenation of NBTI-Critical Logic Paths in Nanoscale Circuits

    The Negative Bias Temperature Instability (NBTI) phenomenon is agreed to be one of the main reliability concerns in nanoscale circuits. It increases the threshold voltage of pMOS transistors, thus slowing down signal propagation along logic paths between flip-flops. NBTI may cause intermittent faults and, ultimately, permanent functional failures of the circuit. In this paper, we propose an innovative NBTI mitigation approach that rejuvenates the nanoscale logic along NBTI-critical paths. The method is based on the hierarchical identification of NBTI-critical paths and the generation of rejuvenation stimuli using an Evolutionary Algorithm. A new, fast, yet accurate gate-level model for the computation of NBTI-induced delays is developed, based on intensive SPICE simulations of individual gates. The generated rejuvenation stimuli drive to the recovery phase those pMOS transistors that are the most critical for the NBTI-induced path delay. The rejuvenation procedure is intended to be applied to the circuit periodically, as an execution overhead. Experimental results on a set of designs demonstrate a reduction of NBTI-induced delays by up to two times, with an execution overhead of 0.1% or less. The proposed approach is aimed at extending the reliable lifetime of nanoelectronics.

    High-sensitivity visualisation of contaminants in heparin samples by spectral filtering of H-1 NMR spectra

    A novel application of two-dimensional correlation analysis has been employed to filter H-1 NMR heparin spectra, distinguishing acceptable natural variation from the presence of foreign species. Analysis of contaminated heparin samples, compared to a dataset of accepted heparin samples using two-dimensional correlation spectroscopic (2D-COS) analysis of their one-dimensional H-1 NMR spectra, allowed the spectral features of contaminants to be recovered with high sensitivity, without having to resort to more complicated NMR experiments. Contaminants that exhibited features distinct from those of heparin, as well as those whose features are normally hidden within the spectral mass of heparin, could be distinguished readily. A heparin sample which had been pre-mixed with a known contaminant, oversulfated chondroitin sulfate (OSCS), was tested against the heparin reference library. Through difference 2D-COS power spectrum analysis, it was possible to recover with ease the H-1 NMR spectrum of the OSCS component at levels as low as 0.25% (w/w), and at 2% (w/w) for more challenging contaminants whose NMR signals fell under those of heparin. The approach shows great promise for the quality control of heparin and provides the basis for greatly improved regulatory control for the analysis of heparin, as well as of other intrinsically heterogeneous and varied products.
    Funding: Wellcome Trust; Royal Society; BBSRC; Finlambardia SPA ‘Fondo per la promozione di Accordi Istituzionali’.
    Affiliations: Univ Liverpool, Sch Biol Sci, Liverpool L69 3BX, Merseyside, England; Ist Ric Chim & Biochim G Ronzoni, I-20133 Milan, Italy; UNIFESP Universidade Federal de São Paulo, Dept Bioquim, Disciplina Biol Mol, BR-04044020 São Paulo, Brazil; Keele Univ, Inst Sci & Technol Med, Keele ST5 5BG, Staffs, England; Natl Inst Biol Stand & Controls, Potters Bar EN6 3QG, Herts, England.
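    The screening step can be illustrated numerically. In the synchronous 2D correlation spectrum computed from a set of 1D spectra, the diagonal (the power spectrum) is the variance of the intensity at each spectral point across the set, so subtracting the reference library's power spectrum from the sample's highlights where extra variation, such as a contaminant band, appears. A toy sketch with invented numbers (three spectral points only, not real heparin data):

```python
from statistics import variance

def power_spectrum(spectra):
    """Diagonal of the synchronous 2D correlation matrix: the variance of the
    intensity at each spectral point across the set of 1D spectra."""
    return [variance(point) for point in zip(*spectra)]

# Three accepted reference spectra vs. three spectra spiked with a contaminant
# band at the last spectral point.
reference = [[1.0, 0.2, 0.0], [1.1, 0.2, 0.0], [0.9, 0.2, 0.0]]
contaminated = [[1.0, 0.2, 0.3], [1.1, 0.2, 0.4], [0.9, 0.2, 0.5]]

diff = [c - r for c, r in zip(power_spectrum(contaminated), power_spectrum(reference))]
print(diff)  # excess variance appears only at the contaminant's position
```

    Natural variation (the first point varies in both sets) cancels in the difference, while the contaminant band stands out, which is the essence of the high-sensitivity filtering described above.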

    Advanced Techniques for Solving Optimization Problems through Evolutionary Algorithms

    Evolutionary algorithms (EAs) are machine-learning techniques that can be exploited in several applications and optimization problems in different fields. Even though the first works on EAs appeared in the scientific literature back in the 1960s, they cannot be considered a mature technology yet. Brand-new paradigms as well as improvements to existing ones are continuously proposed by scholars and practitioners. This thesis describes the activities performed on GP, an existing EA toolkit developed at Politecnico di Torino since 2002. The works span from the design and experimentation of new technologies to the application of the toolkit to specific industrial problems. In more detail, the studies addressed during these three years targeted: the realization of an optimal process to select genetic operators during the optimization process; the definition of a new distance metric able to calculate differences between individuals and to maintain diversity within the population (diversity preservation); the design and implementation of a new cooperative approach to evolution, able to group individuals in order to optimize a set of sub-optimal solutions instead of optimizing only one individual.

    An Efficient Distance Metric for Linear Genetic Programming

    Defining a distance measure over the individuals in the population of an Evolutionary Algorithm can be exploited for several applications, ranging from diversity preservation to balancing exploration and exploitation. When individuals are encoded as strings of bits or sets of real values, computing the distance between any two can be a straightforward process; when individuals are represented as trees or linear graphs, however, quite often the user must resort to phenotype-level, problem-specific distance metrics. This paper presents a generic genotype-level distance metric for Linear Genetic Programming: the information contained in an individual is represented as a set of symbols, using n-grams to capture significant recurring structures inside the genome. The difference in information between two individuals is then evaluated by resorting to a symmetric difference. Experimental evaluations show that the proposed metric has a strong correlation with phenotype-level, problem-specific distance measures in two problems where individuals represent strings of bits and Assembly-language programs, respectively.
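    The n-gram idea can be illustrated with a minimal sketch (not the paper's implementation; the choice of unigrams plus bigrams is an assumption): each linear genome is mapped to the set of n-grams occurring in it, and the distance is the size of the symmetric difference of the two sets:

```python
# Map a linear genome (a sequence of opcodes) to the set of its n-grams,
# then measure distance as the size of the symmetric difference of two sets.

def ngram_set(genome, n_max=2):
    """Set of all n-grams of the genome, for n = 1 .. n_max."""
    symbols = set()
    for n in range(1, n_max + 1):
        for i in range(len(genome) - n + 1):
            symbols.add(tuple(genome[i:i + n]))
    return symbols

def ngram_distance(g1, g2):
    return len(ngram_set(g1) ^ ngram_set(g2))  # symmetric difference

a = ['mov', 'add', 'jmp']
b = ['mov', 'sub', 'jmp']
print(ngram_distance(a, b))  # 6: 2 differing unigrams + 4 differing bigrams
```

    Because the comparison works on the genotype, it applies to any linear representation without requiring the problem-specific knowledge a phenotype-level metric would need.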

    On Test Program Compaction

    While the compaction of binary test sequences for generic sequential circuits has been widely explored, the compaction of test programs for processor-based systems is still an open area of research. Test program compaction is practically important because there are several scenarios in which Software-Based Self-Test (SBST) is adopted, and the size of the test program is often a critical parameter. This paper is among the first to propose algorithms able to automatically compact an existing test program. The proposed solution is based on instruction removal and restoration, which is shown to significantly reduce the computational cost compared with instruction removal alone. Experimental results are reported, showing the compaction capabilities and computational costs of the proposed algorithms.
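    A minimal sketch of the removal-and-restoration idea, assuming a black-box evaluate() that returns the fault coverage of a test program (the real algorithms work on actual test programs and fault simulators; all names here are illustrative): each instruction is tentatively removed, and restored only if coverage drops:

```python
# Greedy compaction: try dropping each instruction; keep the removal when the
# coverage measured by evaluate() is preserved, restore it otherwise.

def compact(program, evaluate):
    baseline = evaluate(program)
    i = 0
    while i < len(program):
        candidate = program[:i] + program[i + 1:]   # tentatively remove one instruction
        if evaluate(candidate) >= baseline:         # coverage preserved: keep removal
            program = candidate
        else:                                       # coverage lost: restore, move on
            i += 1
    return program

# Toy coverage function: faults are covered by the instructions in `needed`.
needed = {'lw', 'add', 'beq'}
cover = lambda prog: len(needed & set(prog))

prog = ['nop', 'lw', 'add', 'nop', 'beq', 'nop']
print(compact(prog, cover))  # ['lw', 'add', 'beq']
```

    Every tentative removal costs one coverage evaluation, so keeping restoration cheap, rather than re-deriving the program after each failed removal, is what drives the computational savings reported above.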

    Universal information distance for genetic programming

    This paper presents a genotype-level distance metric for Genetic Programming (GP) based on the symmetric-difference concept: first, the information contained in individuals is expressed as a set of symbols (the content of each node, its position inside the tree, and recurring parent-child structures); then, the difference between two individuals is computed as the number of elements belonging to one, but not both, of their symbol sets.
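    A rough sketch of the symbol-set construction (the tree encoding and symbol names are assumptions, not the paper's code): each tree is mapped to a set holding every node's content with its position path, plus parent-child pairs, and the distance is again the size of the symmetric difference:

```python
# A GP tree is a nested tuple (operator, child, child, ...); leaves are plain
# strings. Symbols record node content at its position and parent-child pairs.

def symbols(tree, path=()):
    if isinstance(tree, tuple):
        op, *children = tree
        syms = {('node', path, op)}
        for i, child in enumerate(children):
            head = child[0] if isinstance(child, tuple) else child
            syms.add(('edge', op, head))            # recurring parent-child structure
            syms |= symbols(child, path + (i,))
        return syms
    return {('node', path, tree)}

def tree_distance(t1, t2):
    return len(symbols(t1) ^ symbols(t2))           # symmetric difference

t1 = ('+', 'x', ('*', 'y', 'z'))
t2 = ('+', 'x', ('*', 'y', 'w'))
print(tree_distance(t1, t2))  # 4: differing leaf node and edge on each side
```

    Identical trees share every symbol and so have distance zero, while each local edit changes only the symbols it touches, keeping the metric cheap to compute at the genotype level.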